Suitability of Java for Solving Large Sparse Positive Definite Systems of Linear Equations using Direct Methods
Author
Abstract
The purpose of the thesis is to determine whether Java, a programming language that evolved out of a research project at Sun Microsystems in 1990, is suitable for solving large sparse linear systems using direct methods; that is, whether a Java implementation can achieve performance comparable to Fortran, the language traditionally used for sparse matrix computation. Performance evaluation criteria include execution speed and memory requirements; a secondary criterion is ease of development. Many attractive features unique to the Java programming language make it desirable for use in sparse matrix computation and provide the motivation for the thesis. The 'write once, run anywhere' proposition, coupled with nearly ubiquitous Java support, removes the need to rewrite programs when the underlying hardware changes. Features such as garbage collection (automatic recycling of memory) and array-index bounds checking make Java programs more robust than those written in Fortran. Java has garnered a poor reputation as a high-performance computing platform, largely because of its poor performance relative to Fortran in its early years. There is now a consensus among researchers that the problem lies not in the Java language itself but in its implementations; as such, improving compiler technology for numerical codes is critical to achieving high performance in numerical Java applications. Preliminary work involved converting SPARSPAK, a collection of Fortran 90 subroutines for solving large sparse systems of linear equations and least squares problems developed by Dr. Alan George, into Java (J-SPARSPAK). It is well known that the majority of the solution time is spent in the numeric factorization phase. Initial benchmarks showed Java running, on average, 3.6 times slower than Fortran for this critical phase; we detail how we improved Java performance to within a factor of two of Fortran.
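For illustration only, and not J-SPARSPAK code: below is a minimal Java sketch of a Cholesky factorization kernel, the kind of array-heavy floating-point loop nest that dominates the numeric factorization phase benchmarked in the abstract. The class and method names are hypothetical, and the kernel is dense for brevity; a real sparse factorization would operate on compressed column structures after a symbolic ordering and factorization step.

```java
// Minimal sketch (not J-SPARSPAK code): a dense Cholesky factorization in Java.
// The nested loops of floating-point updates and array indexing are representative
// of the work that dominates the numeric factorization phase.
public final class CholeskySketch {

    /** Factor a symmetric positive definite matrix A (n x n, row-major) into L with A = L * L^T. */
    public static double[][] factor(double[][] a) {
        int n = a.length;
        double[][] l = new double[n][n];
        for (int j = 0; j < n; j++) {
            double diag = a[j][j];
            for (int k = 0; k < j; k++) {
                diag -= l[j][k] * l[j][k];
            }
            if (diag <= 0.0) {
                throw new IllegalArgumentException("Matrix is not positive definite");
            }
            l[j][j] = Math.sqrt(diag);
            for (int i = j + 1; i < n; i++) {
                double sum = a[i][j];
                for (int k = 0; k < j; k++) {
                    sum -= l[i][k] * l[j][k];
                }
                l[i][j] = sum / l[j][j];
            }
        }
        return l;
    }

    public static void main(String[] args) {
        double[][] a = {{4.0, 2.0}, {2.0, 3.0}};
        double[][] l = factor(a);
        // Expected: L = [[2.000, 0], [1.000, 1.414]]
        System.out.printf("L = [[%.3f, 0], [%.3f, %.3f]]%n", l[0][0], l[1][0], l[1][1]);
    }
}
```

Loop nests of this kind are where JIT code quality and the cost of array-bounds checks show up most directly, which is why they are the natural focus of a Fortran-versus-Java comparison.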
Similar Resources
Preconditioned Generalized Minimal Residual Method for Solving Fractional Advection-Diffusion Equation
Fractional differential equations (FDEs) have attracted much attention and have been widely used in finance, physics, image processing, biology, and other fields. It is not always possible to find an analytical solution for such equations, so an approximate solution or numerical scheme may be a good approach, particularly schemes from numerical linear algebra for solving ...
On Newton-HSS Methods for Systems of Nonlinear Equations with Positive-Definite Jacobian Matrices
The Hermitian and skew-Hermitian splitting (HSS) method is an unconditionally convergent iteration method for solving large sparse non-Hermitian positive definite systems of linear equations. By making use of the HSS iteration as the inner solver for the Newton method, we establish a class of Newton-HSS methods for solving large sparse systems of nonlinear equations with positive definite Jacobi...
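As background for this snippet, the following is a sketch of the HSS splitting iteration in the standard form found in the literature; the notation (H, S, the shift α, and the Newton update) is introduced here for exposition and is not taken from the cited paper.

```latex
% HSS splitting of a non-Hermitian positive definite matrix A:
%   H = (A + A^{H})/2  (Hermitian part),   S = (A - A^{H})/2  (skew-Hermitian part),
% so that A = H + S. One HSS sweep for A x = b with shift parameter \alpha > 0:
\[
\begin{aligned}
(\alpha I + H)\, x^{(k+1/2)} &= (\alpha I - S)\, x^{(k)} + b,\\
(\alpha I + S)\, x^{(k+1)}   &= (\alpha I - H)\, x^{(k+1/2)} + b.
\end{aligned}
\]
% In a Newton-HSS method, sweeps of this form serve as the inner solver for the
% Newton equation F'(x_k) d_k = -F(x_k), followed by the update x_{k+1} = x_k + d_k.
```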
The null-space method and its relationship with matrix factorizations for sparse saddle point systems
The null-space method for solving saddle point systems of equations has long been used to transform an indefinite system into a symmetric positive definite one of smaller dimension. A number of independent works in the literature have identified the equivalence of the null-space method and matrix factorizations. In this report, we review these findings, highlight links between them, and bring t...
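To make the reduction concrete, here is a short worked statement of the null-space reduction for a saddle point system; the notation (A, B, Z, x̂) is introduced here for illustration and is not taken from the cited report.

```latex
% Saddle point system (notation introduced here for illustration),
% with A symmetric positive definite on the null space of B:
\[
\begin{pmatrix} A & B^{T} \\ B & 0 \end{pmatrix}
\begin{pmatrix} x \\ y \end{pmatrix}
=
\begin{pmatrix} f \\ g \end{pmatrix}.
\]
% Let Z be a basis for null(B) and let \hat{x} satisfy B\hat{x} = g.
% Writing x = \hat{x} + Z v and premultiplying the first block row by Z^{T}
% (so that Z^{T} B^{T} = (B Z)^{T} = 0) yields the reduced system
\[
Z^{T} A Z \, v = Z^{T}\bigl(f - A \hat{x}\bigr),
\]
% whose coefficient matrix Z^{T} A Z is symmetric positive definite and of smaller
% dimension than the original indefinite system.
```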
Conjugate gradient method - Wikipedia, the free encyclopedia
In mathematics, the conjugate gradient method is an algorithm for the numerical solution of particular systems of linear equations, namely those whose matrix is symmetric and positive-definite. The conjugate gradient method is often implemented as an iterative algorithm, applicable to sparse systems that are too large to be handled by a direct implementation or other direct methods such as the ...
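As a concrete illustration of the method described above, here is a minimal unpreconditioned conjugate gradient sketch in Java. The class and method names are hypothetical, and a dense matrix-vector product is used for brevity; a practical solver would replace it with a sparse one.

```java
// Minimal sketch of the conjugate gradient method for a symmetric positive
// definite system Ax = b, starting from x = 0.
public final class ConjugateGradientSketch {

    static double[] multiply(double[][] a, double[] x) {
        double[] y = new double[a.length];
        for (int i = 0; i < a.length; i++) {
            double s = 0.0;
            for (int j = 0; j < x.length; j++) {
                s += a[i][j] * x[j];
            }
            y[i] = s;
        }
        return y;
    }

    static double dot(double[] u, double[] v) {
        double s = 0.0;
        for (int i = 0; i < u.length; i++) {
            s += u[i] * v[i];
        }
        return s;
    }

    /** Solve Ax = b by CG until the residual norm drops below tol or maxIter is reached. */
    static double[] solve(double[][] a, double[] b, double tol, int maxIter) {
        int n = b.length;
        double[] x = new double[n];
        double[] r = b.clone();        // residual r = b - A*0 = b
        double[] p = r.clone();        // initial search direction
        double rsOld = dot(r, r);
        for (int it = 0; it < maxIter && Math.sqrt(rsOld) > tol; it++) {
            double[] ap = multiply(a, p);
            double alpha = rsOld / dot(p, ap);   // step length along p
            for (int i = 0; i < n; i++) {
                x[i] += alpha * p[i];
                r[i] -= alpha * ap[i];
            }
            double rsNew = dot(r, r);
            double beta = rsNew / rsOld;         // conjugation coefficient
            for (int i = 0; i < n; i++) {
                p[i] = r[i] + beta * p[i];
            }
            rsOld = rsNew;
        }
        return x;
    }

    public static void main(String[] args) {
        double[][] a = {{4.0, 1.0}, {1.0, 3.0}};
        double[] b = {1.0, 2.0};
        double[] x = solve(a, b, 1e-10, 100);
        // Expected solution: approximately [0.090909, 0.636364]
        System.out.printf("x = [%.6f, %.6f]%n", x[0], x[1]);
    }
}
```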
An Experimental Evaluation of Iterative Solvers for Large SPD Systems of Linear Equations
Direct methods for solving sparse systems of linear equations are fast and robust, but can consume an impractical amount of memory, particularly for large three-dimensional problems. Preconditioned iterative solvers have the potential to solve very large systems with a fraction of the memory used by direct methods. The diversity of preconditioners makes it difficult to analyze them in a unified...
Publication date: 2004